
Conversation


@shen-shanshan shen-shanshan commented Aug 11, 2025

What this PR does / why we need it?

Fix #2033

Sync vllm-project/vllm#14702 to solve the grammar_bitmask IndexError caused by the outdated apply_grammar_bitmask method.

Does this PR introduce any user-facing change?

No

How was this patch tested?


Signed-off-by: shen-shanshan <467638484@qq.com>

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @shen-shanshan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've addressed a critical bug in the apply_grammar_bitmask method within the model runner. This change resolves an IndexError by updating how grammar bitmasks are reordered and applied to logits, particularly when speculative decoding is active. The core purpose is to ensure that structured output generation remains stable and correct under various decoding scenarios, preventing crashes and improving the reliability of the system.

Highlights

  • Bug Fix for Grammar Bitmasking: I've fixed an IndexError that could occur in the apply_grammar_bitmask method. This was caused by an outdated approach to reordering the grammar bitmask, especially when speculative decoding was involved. The sorted_bitmask is now correctly sized based on logits.shape[0], preventing out-of-bounds access.
  • Improved Speculative Decoding Compatibility: The logic within apply_grammar_bitmask has been significantly revised to properly account for speculative decode tokens. This ensures that the grammar_bitmask is applied to all relevant tokens, including those generated speculatively, by correctly calculating logit indices with a cumulative_offset.
  • Refactored Bitmask Reordering: I've refactored the process of reordering the grammar_bitmask to align with the batch order. The new approach ensures that the bitmask accurately corresponds to the current batch's request and token sequence, improving the robustness of structured output generation.
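The sizing and scattering described in the highlights can be sketched as follows. This is a minimal NumPy illustration, not the actual vLLM implementation (which operates on PyTorch tensors); the function and parameter names here are assumptions for illustration only:

```python
import numpy as np

def apply_grammar_bitmask_sketch(logits, grammar_bitmask, logit_indices):
    """Scatter bitmask rows onto the logit rows they constrain, then mask.

    logits:          (num_logits, vocab_size) float array
    grammar_bitmask: (num_mask_rows, vocab_size) bool array, one row per
                     structured-output token (True = token allowed)
    logit_indices:   for each bitmask row, the logit row it applies to,
                     computed upstream with a cumulative offset over the
                     per-request token counts (speculative decoding can
                     yield more than one token per request)
    """
    # Size the scatter target from logits.shape[0] -- the essence of the
    # fix: an undersized buffer here is what caused the IndexError.
    sorted_bitmask = np.ones(logits.shape, dtype=bool)  # default: allow all
    sorted_bitmask[logit_indices] = grammar_bitmask
    logits[~sorted_bitmask] = -np.inf  # disallowed tokens can't be sampled
    return logits
```

Rows without a structured-output constraint keep their logits untouched, since the buffer defaults to "allow all".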
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request provides a bugfix for an IndexError in the apply_grammar_bitmask method and adds support for speculative decoding. The changes appear correct and robust. The sorted_bitmask is now sized appropriately to match the logits tensor, which should resolve the reported error. The logic also correctly handles the variable number of tokens per request introduced by speculative decoding. My review includes two suggestions to improve the code's clarity and maintainability by using more descriptive variable names, which is important given the complexity of the indexing logic.
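To make the cumulative-offset idea concrete: with speculative decoding each request contributes a variable number of logit rows, so each request's starting row is a running sum of per-request token counts. A tiny sketch with made-up counts (the names and numbers here are illustrative, not taken from the patch):

```python
from itertools import accumulate

# Hypothetical per-request token counts in batch order: one sampled token
# plus however many speculative tokens the scheduler assigned each request.
tokens_per_req = [1, 3, 1, 2]

# Running sum gives each request's starting logit row; request i's logits
# occupy rows offsets[i] .. offsets[i] + tokens_per_req[i] - 1.
offsets = [0, *accumulate(tokens_per_req)]
starts = offsets[:-1]  # starting logit row for each request
```

With these counts, the four requests start at rows 0, 1, 4, and 5 of the logits tensor, which is why a fixed one-row-per-request assumption breaks once speculative tokens are present.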

Comment on lines +1299 to +1301
seq = sorted(self.input_batch.req_id_to_index.items(),
             key=lambda x: x[1])
for req_id, batch_index in seq:

Severity: medium

For clarity and to avoid confusion, consider renaming seq to something more descriptive, like requests_in_batch_order. This is especially helpful because seq is reused later with a different meaning.

Suggested change
seq = sorted(self.input_batch.req_id_to_index.items(),
             key=lambda x: x[1])
for req_id, batch_index in seq:
requests_in_batch_order = sorted(self.input_batch.req_id_to_index.items(),
                                 key=lambda x: x[1])
for req_id, batch_index in requests_in_batch_order:

Comment on lines +1315 to +1317
seq = sorted(scheduler_output.structured_output_request_ids.items(),
             key=lambda x: x[1])
for req_id, _ in seq:

Severity: medium

The variable seq is reused from a previous loop. To improve readability and prevent potential bugs during future modifications, it's better to use a distinct and descriptive name, such as grammar_requests_in_mask_order.

Suggested change
seq = sorted(scheduler_output.structured_output_request_ids.items(),
             key=lambda x: x[1])
for req_id, _ in seq:
grammar_requests_in_mask_order = sorted(scheduler_output.structured_output_request_ids.items(),
                                        key=lambda x: x[1])
for req_id, _ in grammar_requests_in_mask_order:
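Independent of the naming question, the `sorted(..., key=lambda x: x[1])` pattern in both loops simply orders request IDs by their index value. A standalone toy example (the dictionary contents are invented):

```python
# Hypothetical mapping from request ID to its position in the batch.
req_id_to_index = {"req-b": 2, "req-a": 0, "req-c": 1}

# Sort the (req_id, index) pairs by index so iteration follows batch order.
requests_in_batch_order = sorted(req_id_to_index.items(), key=lambda x: x[1])
# requests_in_batch_order == [("req-a", 0), ("req-c", 1), ("req-b", 2)]
```

Using distinct names for the two sorted sequences makes it obvious that one follows batch order and the other follows the bitmask's structured-output order.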

@wangxiyuan wangxiyuan merged commit 8c7bc45 into vllm-project:v0.9.1-dev Aug 19, 2025
16 checks passed